Smoothed Complexity Theory
Smoothed analysis is a new way of analyzing algorithms introduced by Spielman
and Teng (J. ACM, 2004). Classical methods like worst-case or average-case
analysis have accompanying complexity classes, like P and AvgP, respectively.
While worst-case or average-case analysis gives us a means to talk about the
running time of a particular algorithm, complexity classes allow us to talk
about the inherent difficulty of problems.
Smoothed analysis is a hybrid of worst-case and average-case analysis and
compensates for some of their drawbacks. Despite its success for the analysis of
single algorithms and problems, there is no embedding of smoothed analysis into
computational complexity theory, which is necessary to classify problems
according to their intrinsic difficulty.
We propose a framework for smoothed complexity theory, define the relevant
classes, and prove some first hardness results (of bounded halting and tiling)
and tractability results (binary optimization problems, graph coloring,
satisfiability). Furthermore, we discuss extensions and shortcomings of our
model and relate it to semi-random models.
Comment: to be presented at MFCS 201
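For orientation, Spielman and Teng's original (continuous) definition can be stated as follows; the notation here is the standard one and is not quoted from the abstract above.

```latex
% An algorithm A has polynomial smoothed complexity if
\max_{x \in \mathbb{R}^n} \;
  \mathbb{E}_{g \sim \mathcal{N}(0, I_n)}
  \bigl[ T_A(x + \sigma \|x\| \, g) \bigr]
  \;\le\; \mathrm{poly}(n, 1/\sigma),
% i.e. the expected running time of A on a Gaussian perturbation of
% relative magnitude sigma of a worst-case input x is polynomial in
% both n and 1/sigma. For discrete inputs, the perturbation is instead
% a small random flip of each bit.
```

A smoothed complexity class, as proposed in the abstract above, fixes a perturbation model and collects the problems solvable within such a bound.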
Smooth analysis of the condition number and the least singular value
Let $a$ be a complex random variable with mean zero and bounded variance.
Let $N_n$ be the random matrix of size $n$ whose entries are iid copies of
$a$, and let $M$ be a fixed matrix of the same size. The goal of this paper is to
give a general estimate for the condition number and least singular value of
the matrix $M + N_n$, generalizing an earlier result of Spielman and Teng for
the case when $a$ is gaussian.
Our investigation reveals an interesting fact: the "core" matrix $M$ does
play a role in tail bounds for the least singular value of $M + N_n$. This
does not occur in the Spielman-Teng studies, where $a$ is gaussian.
Consequently, our general estimate involves the norm $\|M\|$.
In the special case when $\|M\|$ is relatively small, this estimate is nearly
optimal and extends or refines existing results.
Comment: 20 pages. An erratum to the published version has been added
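For reference, the two quantities the abstract above estimates are related as follows (standard definitions, not quoted from the paper):

```latex
% Least singular value and condition number of an invertible n x n matrix A:
\sigma_n(A) = \min_{\|v\| = 1} \|Av\|, \qquad
\kappa(A) = \|A\| \cdot \|A^{-1}\| = \frac{\sigma_1(A)}{\sigma_n(A)},
% so a lower tail bound on the least singular value sigma_n translates
% directly into an upper tail bound on the condition number kappa.
```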
Large Scale Spectral Clustering Using Approximate Commute Time Embedding
Spectral clustering is a novel clustering method which can detect complex
shapes of data clusters. However, it requires the eigen-decomposition of the
graph Laplacian matrix, whose cost is proportional to $O(n^3)$, and thus it is not
suitable for large scale systems. Recently, many methods have been proposed to
accelerate spectral clustering. These approximate
methods usually involve sampling techniques, by which a lot of information in the
original data may be lost. In this work, we propose a fast and accurate
spectral clustering approach using an approximate commute time embedding, which
is similar to the spectral embedding. The method does not require any
sampling technique and does not compute any eigenvector at all. Instead, it uses random
projection and a linear-time solver to find the approximate embedding. The
experiments on several synthetic and real datasets show that the proposed
approach has better clustering quality and is faster than state-of-the-art
approximate spectral clustering methods.
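To make the embedding concrete, here is a toy sketch of the exact commute-time embedding that the method above approximates. For small graphs it can be built directly from the pseudoinverse of the graph Laplacian; the paper's contribution is avoiding exactly this kind of eigen-decomposition via random projection and a near-linear-time solver. The 4-node path graph below is a made-up example, not from the paper.

```python
import numpy as np

# Toy graph: an unweighted, connected 4-node path 0-1-2-3 (assumption).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
L = np.diag(deg) - A                 # graph Laplacian
Lp = np.linalg.pinv(L)               # Moore-Penrose pseudoinverse L^+
vol = deg.sum()                      # graph volume (sum of degrees)

# Exact commute-time embedding: rows of sqrt(vol) * V * S^{1/2},
# where L^+ = V diag(S) V^T. This is the step the paper replaces with
# random projection + a fast Laplacian solver.
S, V = np.linalg.eigh(Lp)
S = np.clip(S, 0.0, None)            # guard tiny negative eigenvalues
emb = np.sqrt(vol) * V * np.sqrt(S)  # scales each eigenvector column

# Squared Euclidean distance between embedded nodes equals the
# commute time c(i, j) = vol * (e_i - e_j)^T L^+ (e_i - e_j).
i, j = 0, 3
d2 = np.sum((emb[i] - emb[j]) ** 2)
c = vol * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])
print(d2, c)  # both are ~18 for this path: vol=6 times resistance 3
```

Clustering then amounts to running k-means on the rows of the (approximate) embedding, since commute-time distance captures cluster structure.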
Lossless fault-tolerant data structures with additive overhead
12th International Symposium, WADS 2011, New York, NY, USA, August 15-17, 2011. Proceedings.
We develop the first dynamic data structures that tolerate δ memory faults, lose no data, and incur only an O(δ) additive overhead in overall space and time per operation. We obtain such data structures for arrays, linked lists, binary search trees, interval trees, predecessor search, and suffix trees. Like previous data structures, δ must be known in advance, but we show how to restore pristine state in linear time, in parallel with queries, making δ just a bound on the rate of memory faults. Our data structures require Θ(δ) words of safe memory during an operation, which may not be theoretically necessary but seems a practical assumption.
Center for Massive Data Algorithmics (MADALGO)
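To see why the O(δ) additive overhead above is notable, compare it with the classical baseline for this fault model: replicating each value 2δ+1 times and reading by majority vote, which costs a multiplicative factor of Θ(δ) per value. The sketch below illustrates only that baseline and the fault model, not the paper's data structures.

```python
from collections import Counter

class ReplicatedCell:
    """Classical baseline (NOT the paper's structure): one value stored
    in 2*delta + 1 copies survives up to delta adversarial memory faults,
    at a multiplicative Theta(delta) space cost per value."""

    def __init__(self, delta, value):
        self.copies = [value] * (2 * delta + 1)

    def read(self):
        # Majority vote is correct while at most delta copies are faulty,
        # since the delta + 1 intact copies always outnumber them.
        return Counter(self.copies).most_common(1)[0][0]

cell = ReplicatedCell(delta=2, value=42)
cell.copies[0] = -1   # simulate two memory faults
cell.copies[3] = 99
print(cell.read())    # prints 42: the value survives both faults
```

The paper's structures achieve the same loss-free guarantee for entire arrays, trees, and lists with only an additive O(δ) overhead overall.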
The tropical shadow-vertex algorithm solves mean payoff games in polynomial time on average
We introduce an algorithm which solves mean payoff games in polynomial time
on average, assuming the distribution of the games satisfies a flip invariance
property on the set of actions associated with every state. The algorithm is a
tropical analogue of the shadow-vertex simplex algorithm, which solves mean
payoff games via linear feasibility problems over the tropical semiring
$(\mathbb{R} \cup \{-\infty\}, \max, +)$. The key ingredient in our approach is
that the shadow-vertex pivoting rule can be transferred to tropical polyhedra,
and that its computation reduces to optimal assignment problems through
Plücker relations.
Comment: 17 pages, 7 figures, appears in 41st International Colloquium, ICALP
2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part
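To fix ideas on the semiring involved: in max-plus arithmetic, "addition" is max and "multiplication" is ordinary +, with -∞ as the tropical zero. The sketch below shows tropical matrix multiplication, the basic operation behind max-plus linear systems; the matrices are made-up examples, not from the paper.

```python
NEG_INF = float("-inf")  # tropical zero: absorbing for "multiplication"

def trop_mul(a, b):
    # Tropical multiplication is ordinary addition; -inf absorbs.
    if a == NEG_INF or b == NEG_INF:
        return NEG_INF
    return a + b

def trop_matmul(A, B):
    # Tropical matrix product: (A ⊙ B)[i][j] = max_k (A[i][k] + B[k][j]),
    # i.e. the usual product with (+, *) replaced by (max, +).
    n, m, p = len(A), len(B), len(B[0])
    return [[max(trop_mul(A[i][k], B[k][j]) for k in range(m))
             for j in range(p)] for i in range(n)]

A = [[0, 3], [NEG_INF, 1]]
B = [[2, NEG_INF], [0, 4]]
print(trop_matmul(A, B))  # [[3, 7], [1, 5]]
```

Feasibility over tropical polyhedra, to which the algorithm above reduces mean payoff games, asks whether a system of such max-plus linear inequalities has a solution.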